Texture filtering

The samples in a texture are called 'texels', just as the samples in an image are called pixels. When shading a pixel, we will often need to use the surface's UV coordinates to look up a texel in the texture. This is fairly easy if there is a one-to-one mapping between the image-space pixels and the texture-space texels.
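As a baseline, here is a minimal sketch of that direct lookup, assuming a grayscale texture stored as a list of rows and UV coordinates in [0, 1]; the name `sample_texel` and the storage layout are illustrative assumptions, not any particular API.

```python
# A minimal sketch of the one-to-one case described above, assuming a grayscale
# texture stored as a list of rows and UV coordinates in the range [0, 1].
def sample_texel(texture, u, v):
    height = len(texture)
    width = len(texture[0])
    # Map continuous UV coordinates onto discrete texel indices,
    # clamping so u = 1.0 or v = 1.0 stays inside the texture.
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]
```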

However, this is rarely the case, since model surfaces will often be transformed and scaled to appear 3D on screen.

In the examples below, consider this texture:

And this pixel grid:

Magnification

Magnification is the easiest case to handle. This is when the texels are much larger than the pixels, which happens when the pixel samples are denser than the texel samples. For example, when a player walks very close to a wall in a first-person game, the wall texture samples are often less dense than the pixel samples.

When querying a magnified texture, the simplest approach is to return the texel that maps closest to the pixel's UV coordinates. For increased quality, the four nearest texels can be interpolated instead. This gives a smoother and more pleasant appearance to the resulting texture map. It is called bilinear interpolation, since it uses two stages of linear interpolation, one for each texture dimension.
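Here is a minimal sketch of the bilinear case, assuming the same grayscale list-of-rows texture as above; the half-texel offset places sample points at texel centers, and clamping at the edges is just one of several possible border behaviors (real samplers also offer wrapping).

```python
import math

# A minimal sketch of bilinear filtering, assuming a grayscale texture stored
# as a list of rows and UV coordinates in [0, 1]. The four texels surrounding
# the sample point are blended with two linear interpolations, one per
# texture dimension.
def lerp(a, b, t):
    return a + (b - a) * t

def sample_bilinear(texture, u, v):
    height = len(texture)
    width = len(texture[0])
    # Continuous texel-space position, shifted so texel centers sit at x + 0.5.
    x = u * width - 0.5
    y = v * height - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    tx, ty = x - x0, y - y0  # fractional distance toward the next texel
    # Clamp the neighboring indices to the texture edges.
    x1 = min(x0 + 1, width - 1)
    y1 = min(y0 + 1, height - 1)
    x0 = max(x0, 0)
    y0 = max(y0, 0)
    # Interpolate horizontally along the top and bottom rows, then vertically.
    top = lerp(texture[y0][x0], texture[y0][x1], tx)
    bottom = lerp(texture[y1][x0], texture[y1][x1], tx)
    return lerp(top, bottom, ty)
```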

Texels are much larger than pixels, so just grab (or interpolate) nearby texels.

In this case, the gray color should be returned as the color for the black pixel.

Minification

Minification is the harder case. This is when the texels are much smaller than the pixels; the texel samples are denser than the pixel samples. In minification, many texels contribute to a single pixel's color. For example, if a viewer is standing very far away from a wall, all of the wall texels could map to just a few pixels.


In this example, many texels contribute to the black pixel's color.

It is reasonable to make a list of all texels that contribute to a pixel's color and find the average color. Unfortunately, a single pixel's footprint can cover thousands of texels, so this is too costly to compute for each pixel. Instead, some values must be precomputed.
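To make that cost concrete, here is a minimal sketch of the brute-force average, assuming the pixel's footprint can be approximated by an axis-aligned UV rectangle (u0, v0) to (u1, v1); the function name and the rectangle approximation are assumptions for illustration.

```python
# A minimal sketch of the brute-force approach described above: average every
# texel inside the pixel's footprint in texture space. The footprint is
# approximated as an axis-aligned UV rectangle; real footprints are more
# complex, and doing this loop per pixel is too slow in practice.
def average_footprint(texture, u0, v0, u1, v1):
    height = len(texture)
    width = len(texture[0])
    # Convert the UV rectangle to a texel index range, clamped to the texture.
    x0 = max(int(u0 * width), 0)
    y0 = max(int(v0 * height), 0)
    x1 = min(max(int(u1 * width), x0), width - 1)
    y1 = min(max(int(v1 * height), y0), height - 1)
    total = 0.0
    count = 0
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            total += texture[y][x]
            count += 1
    return total / count
```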

Mip maps

Mip maps are precomputed averages of texels: a chain of progressively smaller copies of the texture, each level half the resolution of the one before it. The mip maps are computed when the texture is created. Then, at runtime, when a minification case is detected, texels are sampled from the closest-matching mip map image.
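Here is a hedged sketch of building such a chain of averages, assuming a square grayscale texture whose side length is a power of two; each level averages 2x2 blocks of the previous one until a single texel remains.

```python
# A minimal sketch of building a mip chain, assuming a square grayscale texture
# whose side length is a power of two. Each level averages 2x2 blocks of the
# previous level, halving the resolution until a single texel remains.
def build_mip_chain(texture):
    levels = [texture]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        size = len(prev) // 2
        level = [[0.0] * size for _ in range(size)]
        for y in range(size):
            for x in range(size):
                level[y][x] = (prev[2*y][2*x] + prev[2*y][2*x + 1] +
                               prev[2*y + 1][2*x] + prev[2*y + 1][2*x + 1]) / 4.0
        levels.append(level)
    return levels
```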

Example mip map:

To further increase quality, texels can be interpolated from the closest two mip maps. This can be combined with bilinear interpolation to form trilinear interpolation: two dimensions of interpolation within each mip image, plus a third dimension across the mip levels.
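A minimal sketch of that blend, reusing `lerp`, `sample_bilinear`, and `build_mip_chain` from the earlier sketches; the `lod` (level of detail) parameter is assumed to come from a minification estimate, with 0 meaning the full-resolution image and higher values meaning smaller mip levels.

```python
# A minimal sketch of trilinear filtering. Fractional lod values blend the two
# nearest mip levels; integer values sample a single level.
def sample_trilinear(mip_chain, u, v, lod):
    lod = max(0.0, min(lod, len(mip_chain) - 1))
    lo = int(lod)
    hi = min(lo + 1, len(mip_chain) - 1)
    # Bilinear sample in each of the two closest levels, then blend between them.
    a = sample_bilinear(mip_chain[lo], u, v)
    b = sample_bilinear(mip_chain[hi], u, v)
    return lerp(a, b, lod - lo)
```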

Advanced filters

The mip map idea handles uniform, view-aligned scaling very well. However, models are often off-axis and not directly aligned with the view. This can lead to square pixels mapping to non-square regions in the texture. Consider a floor with a texture map applied: the pixels map to trapezoidal regions in the texture. Using mip maps in this case can result in poor sampling and blurry images.

Making averaged maps for off-axis views of the texture can improve quality. Similar to mip maps, many images are generated before rendering; then, at runtime, the image whose shape most closely matches the pixel's footprint in the texture is used for sampling. This is called anisotropic filtering, since it handles footprints that are stretched differently along different directions (not isotropic).

This example only shows maps off a single axis. Modern anisotropic filters include images off both axes.
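Whichever precomputed images are available, the choice of which one to use can be driven by how stretched the pixel's footprint is in texture space. Below is a hedged sketch of estimating that stretch from the screen-space derivatives of the UV coordinates, assumed to be expressed in texels per pixel; the function name, the derivative convention, and the `max_ratio` clamp are illustrative assumptions, not any particular graphics API.

```python
import math

# A minimal sketch of measuring a pixel's anisotropy, assuming the screen-space
# derivatives of the UV coordinates (du/dx, dv/dx, du/dy, dv/dy) are already
# expressed in texels per pixel. All names and the max_ratio clamp are
# illustrative, not taken from any particular graphics API.
def footprint_anisotropy(dudx, dvdx, dudy, dvdy, max_ratio=16.0):
    # How far the footprint stretches in texture space per pixel step,
    # along each screen axis.
    len_x = math.hypot(dudx, dvdx)
    len_y = math.hypot(dudy, dvdy)
    longer = max(len_x, len_y)
    shorter = max(min(len_x, len_y), 1e-8)
    # The ratio says how non-square (how anisotropic) the footprint is.
    ratio = min(longer / shorter, max_ratio)
    # Choosing the level of detail from the shorter axis keeps the long axis
    # sharp; the filter then covers the long axis with extra samples or with a
    # precomputed off-axis image like those described above.
    lod = math.log2(max(shorter, 1.0))
    return ratio, lod
```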